
Conversation

@whitneywhtsang
Contributor

This PR changes the Triton base from 206c410 to 251ec88 (Nov 20).
Pass rate: 93.23%

Please do not squash and merge this PR.

lezcano and others added 5 commits November 20, 2024 22:20
We unify it and simplify its API (it was taking an unused `shape`
parameter). While doing this, we found that the previous implementation
was incorrect at least for `AMDWmmaEncodingAttr`, as this layout was
using the shape parameter.

Interestingly enough, the doc in the header file for this function noted
that the function is indeed independent of the tensor shape, even though
it does take a shape as an input!

https://github.com/triton-lang/triton/blob/0bd30a2f3192204c5a50d5ffde27ad8493f6c026/include/triton/Dialect/TritonGPU/IR/Dialect.h#L113-L114
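The hazard described above can be sketched in a few lines. This is a toy illustration, not the actual Triton C++ API: the class and method names are hypothetical, and the point is only that a parameter documented as unused is a trap, because nothing stops an implementation (as happened with `AMDWmmaEncodingAttr`) from quietly depending on it. Removing it from the signature enforces the documented guarantee.

```python
# Hypothetical sketch; names are illustrative, not the real Triton interface.

class LayoutBefore:
    """Old-style API: takes a `shape` the docs promise is unused."""
    def __init__(self, elems_per_thread):
        self._elems = elems_per_thread

    def get_elems_per_thread(self, shape):
        # Nothing in the signature stops an implementation from
        # (incorrectly) reading `shape`; that is how a layout can
        # silently violate the documented shape-independence.
        return self._elems


class LayoutAfter:
    """Unified API: the unused parameter is gone, so shape
    independence is enforced by the signature itself."""
    def __init__(self, elems_per_thread):
        self._elems = elems_per_thread

    def get_elems_per_thread(self):
        return self._elems
```

With the simplified signature, a caller cannot pass a shape at all, so the class of bug fixed here cannot reappear.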
…ute API. (#5196)

The util function `getDistributedLayoutStr` uses the `DistributedLayout`
attribute interface, which is not flexible for third-party extensions.
Use `getInDimSize` of the `LinearLayout` instead, which works because
the legacy layout has already been converted to a `LinearLayout`.

There is no new test case since it is only a change in API usage.
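As a rough illustration of the direction of this change (a toy sketch, not Triton's actual C++ `LinearLayout` class), once every legacy layout has been converted to a linear layout, per-dimension sizes can be read off the linear layout itself, so a printing utility no longer needs to go through the `DistributedLayout` attribute interface. The dimension names below mirror Triton's convention, but the class is purely illustrative.

```python
# Toy model of a linear layout keyed by named input dimensions.
# Purely illustrative; not the real Triton LinearLayout API.

class ToyLinearLayout:
    def __init__(self, in_dim_sizes):
        # e.g. {"register": 4, "lane": 32, "warp": 4, "block": 1}
        self._in_dim_sizes = dict(in_dim_sizes)

    def get_in_dim_size(self, dim):
        # Size of one named input dimension, queried directly from
        # the linear layout rather than from a layout attribute.
        return self._in_dim_sizes[dim]


def layout_str(ll):
    # The printer only needs the linear layout, so it works for any
    # layout (including third-party ones) that converts to one.
    return ", ".join(f"{d}={ll.get_in_dim_size(d)}"
                     for d in ("register", "lane", "warp", "block"))
```

The benefit for third-party extensions is that they only have to produce a linear layout, not implement a specific attribute interface, for utilities like this to work.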
@whitneywhtsang whitneywhtsang merged commit 76c054e into main Nov 21, 2024
5 checks passed
@whitneywhtsang whitneywhtsang deleted the whitneywhtsang/merge branch November 21, 2024 04:24

5 participants